CS224d Project Final Report

Authors

  • Elaina Chai
  • Neil Gallagher
Abstract

We develop a Recurrent Neural Network (RNN) Language Model to extract sentences from Yelp Review Data for the purpose of automatic summarization. We compare these extracted sentences against user-generated tips in the Yelp Academic Dataset using ROUGE and BLEU metrics for summarization evaluation. The performance of a uni-directional RNN is compared against word-vector averaging.
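As a rough illustration of the evaluation setup described above (not the authors' code), the sketch below computes unigram ROUGE-1 recall and a BLEU-1-style precision between an extracted review sentence and a user-written tip. The example strings and function names are assumptions; the report would presumably use the standard multi-n-gram ROUGE and BLEU implementations, which add stemming, higher-order n-grams, and a brevity penalty.

```python
# Minimal sketch of unigram overlap metrics for extractive summaries.
# ROUGE-1 recall: how much of the reference tip the extracted sentence covers.
# BLEU-1-style precision: how much of the extracted sentence appears in the tip.
from collections import Counter


def rouge1_recall(candidate: str, reference: str) -> float:
    """Fraction of reference unigrams also present in the candidate."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / max(sum(ref.values()), 1)


def bleu1_precision(candidate: str, reference: str) -> float:
    """Fraction of candidate unigrams also present in the reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], ref[w]) for w in cand)
    return overlap / max(sum(cand.values()), 1)


if __name__ == "__main__":
    # Hypothetical extracted sentence and Yelp tip, for illustration only.
    extracted = "the tacos here are amazing and the service is fast"
    tip = "amazing tacos and fast friendly service"
    print(f"ROUGE-1 recall:   {rouge1_recall(extracted, tip):.3f}")
    print(f"BLEU-1 precision: {bleu1_precision(extracted, tip):.3f}")
```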


Similar Papers

CS224D Project Final Report Summarizing Reviews and Predicting Rating for Yelp Dataset

The report explores the use of Recurrent Neural Networks (RNN) and Convolutional Neural Networks (CNN) in summarising text reviews and predicting review rating for the Yelp dataset. I use the fact that the reviews are labelled (by rating) to extract important sentences/words, which are then used as the summary for the review. I use an interesting evaluation technique to measure the relevance of...


Author Attribution with CNN’s

In this report, the results from my CS224D final project are given and explained. The project was based on the application of relatively new neural network architectures, namely convolutional neural networks over word embeddings, to the task of authorship identification. The problem was posed as a classification task and models were evaluated over two datasets, a baseline of my own collection a...


CS224D Final Report: Deep Recurrent Attention Networks for LaTeX to Source

For our project, we wanted to explore the problem of recognizing LaTeX expressions and translating them to source using a deep neural network. Previous work involving attention models has improved sequence-to-sequence mappings and greatly helped in digit recognition. Inspired by this previous work, we implemented an attention model to recognize simple LaTeX expressions and also tested it on a s...


Extensions to Tree-Recursive Neural Networks for Natural Language Inference

Understanding textual entailment and contradiction is considered fundamental to natural language understanding. Tree-recursive neural networks, which exploit valuable syntactic parse information, achieve state-of-the-art accuracy among pure sentence encoding models for this task. In this course project for CS224D, we explore two extensions to tree-recursive neural networks: deep TreeLSTMs and at...



Publication year: 2015